Learning Stochastically Evolving Networks via Local Probing
Authors
Abstract
We consider the problem of learning the state of a dynamic network whose vertex values are perturbed over time. Our only interface to the network comes in the form of point probes, through which we may query local information about the state of a specific point in the network. In this paper, we consider network models in which the values of the vertices are perturbed uniformly at random at every time step, and in which we may only query adjacent vertices to determine whether or not they have the same value. This model is drawn from numerous practical examples where determining the precise value of a vertex is impossible, but differentiating between adjacent vertices with disagreeing values is feasible. Under this model we consider both the noiseless and noisy cases, where in the latter each probe answer is inverted independently with probability α. We first derive an inverse linear lower-bound tradeoff between the number of probes and the fraction of errors in the network, which holds in either case. In the noiseless case, we design an algorithm which randomly initializes a hypothesis state and then deterministically traverses the network to update it. We show that our algorithm is always within a constant factor of the lower bound for arbitrarily high polynomially many time steps, with high probability. In the noisy case, the problem becomes substantially more difficult, and performance depends on the expansion of the graph. We show that an algorithm which is allowed at least k ∈ Ω(log(n)/(1 − λ₂) · r) probes on every time step never accumulates more than O(n/√r) errors at any time for arbitrarily high polynomially many time steps, assuming that α⁻¹ ∈ Ω(k), where (1 − λ₂) is the spectral gap of the graph. An alternate analysis shows that for quadratically many time steps we can tighten this bound to O(n / (r(1 − α log(n)/(1 − λ₂)))), assuming α⁻¹ ∈ Ω(k²). Furthermore, we demonstrate that if the number of errors accumulated at time t exceeds our bounds by a factor of M, then in expectation we return to within the bound in at most O(n log(M)) steps. Since our bounds for the noisy case are not as tight (we lose at least a logarithmic factor in the probe-error tradeoff), we demonstrate experimentally that our algorithm in the noisy case appears to perform as well as the theoretical guarantee for the noiseless case.
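To make the probing model concrete, the following is a minimal Python sketch of the kind of procedure the noiseless-case result describes: a randomly initialized hypothesis maintained by a fixed, deterministic sweep over the edges, with one repair per contradicting probe. Everything here is an illustrative assumption rather than the paper's actual construction: the cycle graph, the flip-on-disagreement repair rule, the perturbation model (uniformly random vertex flips each step), and the names make_cycle, ProbeLearner, and simulate are all hypothetical; setting alpha > 0 only crudely emulates the noisy case.

import random

def make_cycle(n):
    # Adjacency list of an n-cycle, standing in for an arbitrary connected graph.
    return {v: [(v - 1) % n, (v + 1) % n] for v in range(n)}

class ProbeLearner:
    # Hypothetical noiseless-case strategy: random initialization, then a fixed
    # deterministic traversal of the edges, repairing the hypothesis whenever a
    # probe contradicts it.
    def __init__(self, adj, rng):
        self.edges = [(u, v) for u in sorted(adj) for v in adj[u] if u < v]
        self.cursor = 0
        self.hyp = {v: rng.randint(0, 1) for v in adj}  # random initial hypothesis

    def step(self, probe, k):
        # Spend k probes this time step; probe(u, v) reports whether u and v agree.
        for _ in range(k):
            u, v = self.edges[self.cursor]
            self.cursor = (self.cursor + 1) % len(self.edges)
            if (self.hyp[u] == self.hyp[v]) != probe(u, v):
                self.hyp[v] ^= 1  # flip v to restore local consistency

def simulate(n=200, k=8, steps=2000, flips_per_step=1, alpha=0.0, seed=0):
    # Drive the learner against a truth vector perturbed at every time step.
    # With alpha > 0, each probe answer is inverted with that probability.
    rng = random.Random(seed)
    adj = make_cycle(n)
    truth = {v: rng.randint(0, 1) for v in adj}

    def probe(u, v):
        agree = truth[u] == truth[v]
        return (not agree) if rng.random() < alpha else agree

    learner = ProbeLearner(adj, rng)
    for _ in range(steps):
        for _ in range(flips_per_step):  # uniform random perturbation of the truth
            truth[rng.randrange(n)] ^= 1
        learner.step(probe, k)
    # Probes reveal only relative agreement, so the hypothesis is identifiable
    # only up to a global complement; score against the better of the two.
    diff = sum(learner.hyp[v] != truth[v] for v in adj)
    return min(diff, n - diff)

if __name__ == "__main__":
    print("errors after run:", simulate())

Since the probes reveal only whether adjacent values agree, the sketch scores the learned hypothesis up to a global complement, mirroring the fact that the true labeling is identifiable only up to this symmetry.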
Similar resources
Renormalization group for evolving networks.
We propose a renormalization group treatment of stochastically growing networks. As an example, we study percolation on growing scale-free networks in the framework of a real-space renormalization group approach. As a result, we find that the critical behavior of percolation on the growing networks differs from that in uncorrelated networks.
Evolutionary Computation for On-line and Off-line Parameter Tuning of Evolving Fuzzy Neural Networks
This work applies Evolutionary Computation to achieve completely self-adapting Evolving Fuzzy Neural Networks (EFuNNs) for operating in both incremental (on-line) and batch (off-line) modes. EFuNNs belong to a class of Evolving Connectionist Systems (ECOS), capable of performing clustering-based, on-line, local area learning and rule extraction. Through EC, its parameters such as learning rates...
Evolving Connectionist Systems for On-line, Knowledge-based Learning: Principles and Applications
The paper introduces evolving connectionist systems (ECOS) as an effective approach to building on-line, adaptive intelligent systems. ECOS evolve through incremental, hybrid (supervised/unsupervised), on-line learning. They can accommodate new input data, including new features, new classes, etc. through local element tuning. New connections and new neurons are created during the operation of ...